Scalable Massively Parallel Artificial Neural Networks

Authors

  • Lyle N. Long
  • Ankur Gupta
Abstract

Artificial Neural Networks (ANNs) can be very effective for pattern recognition, function approximation, scientific classification, control, and the analysis of time-series data; however, they can require very long training times for large networks. Once a network is trained for a particular problem, though, it can produce results in a very short time. Traditional ANNs trained with the back-propagation algorithm do not scale well, because each neuron in one layer is fully connected to every neuron in the previous layer. In the present work, only the neurons at the edges of the domains take part in communication, which reduces communication costs and maintains scalability. Ghost neurons were created at the processor boundaries to carry this exchanged information. An object-oriented, massively parallel ANN software package, SPANN (Scalable Parallel Artificial Neural Network), has been developed and is described here. MPI was used to parallelize the C++ code, and the back-propagation algorithm was used to train the network. In preliminary tests, the software was used to identify character sets consisting of 48 characters at increasing resolutions. The code correctly identified all of the characters when the network was adequately trained. Training a problem with 2 billion neuron weights on an IBM BlueGene/L computer using 1,000 dual PowerPC 440 processors required less than 30 minutes. Comparisons of training time, forward-propagation time, and error reduction were also made.
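The abstract's key scaling idea is that, after the network is decomposed across processors, only the neurons at the edges of each subdomain communicate, using ghost copies held at the processor boundaries. The SPANN source itself is not reproduced here; the sketch below only illustrates that ghost-exchange pattern in C++ with MPI, and every name in it (LOCAL_NEURONS, exchange_ghosts, the 1-D decomposition with single-value ghosts) is a hypothetical illustration, not the paper's actual interface.

// Minimal sketch (not the authors' SPANN code) of exchanging "ghost" neuron
// activations between neighboring MPI ranks in a 1-D domain decomposition.
// All names (LOCAL_NEURONS, GHOST, exchange_ghosts, ...) are hypothetical.
#include <mpi.h>
#include <vector>
#include <cstdio>

static const int LOCAL_NEURONS = 1000;  // neurons owned by this rank
static const int GHOST = 1;             // one ghost neuron on each side

// Activations are stored as [left ghost | local neurons | right ghost].
void exchange_ghosts(std::vector<double>& act, int rank, int nprocs)
{
    const int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    const int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    // Send my first owned value to the left neighbor; receive the right
    // neighbor's first owned value into my right ghost slot.
    MPI_Sendrecv(&act[GHOST],                     1, MPI_DOUBLE, left,  0,
                 &act[GHOST + LOCAL_NEURONS],     1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Send my last owned value to the right neighbor; receive the left
    // neighbor's last owned value into my left ghost slot.
    MPI_Sendrecv(&act[GHOST + LOCAL_NEURONS - 1], 1, MPI_DOUBLE, right, 1,
                 &act[0],                         1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // Each rank initializes only its own slice of the layer.
    std::vector<double> act(LOCAL_NEURONS + 2 * GHOST, 0.0);
    for (int i = 0; i < LOCAL_NEURONS; ++i)
        act[GHOST + i] = rank;  // dummy activation values

    exchange_ghosts(act, rank, nprocs);  // only boundary neurons communicate

    std::printf("rank %d: left ghost = %g, right ghost = %g\n",
                rank, act[0], act[GHOST + LOCAL_NEURONS]);
    MPI_Finalize();
    return 0;
}

Using MPI_PROC_NULL for the missing neighbors of the first and last rank turns those sends and receives into no-ops, so the same MPI_Sendrecv calls handle the physical boundaries without special-case code.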


Similar Articles

Multi-Dimensional Self-Organizing Maps on Massively Parallel Hardware

Although available (sequential) computer hardware is very powerful nowadays, the implementation of artificial neural networks on massively parallel hardware is still undoubtedly of high interest, and not only from an academic point of view. This paper presents an implementation of multi-dimensional Self-Organizing Maps on a scalable SIMD structure of a CNAPS computer with up to 512 parallel proces...
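For context, the core computation that any Self-Organizing Map implementation parallelizes is the Kohonen weight update; the form below is the standard textbook rule, not notation taken from the CNAPS paper above:

$$ w_i(t+1) = w_i(t) + \alpha(t)\, h_{c(x),i}(t)\, \bigl[x(t) - w_i(t)\bigr], $$

where $x(t)$ is the current input vector, $c(x)$ is the best-matching unit, $\alpha(t)$ is a decaying learning rate, and $h_{c(x),i}(t)$ is a neighborhood function that shrinks over time. On SIMD hardware, the distance search for $c(x)$ and the update of all $w_i$ are the naturally parallel steps.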

Massively parallel neural encoding and decoding of visual stimuli

The massively parallel nature of video Time Encoding Machines (TEMs) calls for scalable, massively parallel decoders that are implemented with neural components. The current generation of decoding algorithms is based on computing the pseudo-inverse of a matrix and does not satisfy these requirements. Here we consider video TEMs with an architecture built using Gabor receptive fields and a popul...
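As background on the pseudo-inverse approach this excerpt refers to, time-decoding machines typically recover the stimulus as a linear combination of basis functions whose coefficients solve a linear system; in generic form (the standard TEM decoding recipe, not equations quoted from that paper):

$$ u(t) = \sum_k c_k\, \psi_k(t), \qquad \mathbf{c} = G^{+}\mathbf{q}, $$

where $\mathbf{q}$ collects the measurements derived from the spike times, $G$ maps basis coefficients to those measurements, and $G^{+}$ is its Moore-Penrose pseudo-inverse. Computing $G^{+}$ is the step the excerpt identifies as failing the scalability and neural-implementation requirements.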

Kinematic Synthesis of Parallel Manipulator via Neural Network Approach

In this research, Artificial Neural Networks (ANNs) have been used as a powerful tool to solve the inverse kinematic equations of a parallel robot. For this purpose, we have developed the kinematic equations of a Tricept parallel kinematic mechanism with two rotational and one translational degree of freedom (DoF). Using the analytical method, the inverse kinematic equations are solved for spe...

Toward Human-Level Massively-Parallel Neural Networks with Hodgkin-Huxley Neurons

This paper describes neural network algorithms and software that scale up to massively parallel computers. The neuron model used, the Hodgkin-Huxley equations, is the best available at this time. Most massively parallel simulations use very simplified neuron models, which cannot accurately simulate biological neurons and the wide variety of neuron types. Using C++ and MPI we can scale these netw...
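For reference, the Hodgkin-Huxley model mentioned in the excerpt is the standard four-variable system below (quoted in its textbook form, not from that paper):

$$ C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{Na}\, m^3 h\,(V - E_{Na}) - \bar{g}_K\, n^4\,(V - E_K) - g_L\,(V - E_L), $$
$$ \frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}, $$

where $V$ is the membrane potential and $m$, $h$, $n$ are gating variables. Each neuron therefore requires integrating four coupled ODEs per time step, which is what makes these simulations far more expensive than the simplified point-neuron models the excerpt contrasts them with.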

Rough Neural Networks: A Review

Rough neural networks (RNNs) are neural networks based on rough set theory and have been an active research topic in artificial intelligence in recent years. They combine the advantage of rough sets in handling uncertain problems (reducing attributes without loss of information and then extracting rules) with the neural networks' strong fault tolerance, self-organization, and massively parallel proces...


Journal:
  • JACIC

Volume 5, Issue 

Pages  -

Publication date: 2008